In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.
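To make the encode-then-decode idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation): a plain tanh RNN stands in for the paper's gated hidden unit, the weights are random rather than trained, and all class, matrix, and method names (`RNNEncoderDecoder`, `Wh`, `log_prob`, etc.) are invented for this example. It shows the two pieces the abstract describes: an encoder that compresses a source symbol sequence into a fixed-length vector, and a decoder that, conditioned on that vector, assigns a conditional probability to a target sequence.

```python
import math
import random

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def matvec(M, v):
    # Matrix-vector product over plain Python lists.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def softmax(v):
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

class RNNEncoderDecoder:
    """Toy encoder-decoder with vanilla tanh RNNs and random (untrained)
    weights; the actual model uses gated hidden units and learns the
    weights by maximizing the target log-likelihood."""

    def __init__(self, vocab_size, hidden, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        self.H = hidden
        self.E = mat(vocab_size, hidden)   # symbol embeddings
        self.Wh = mat(hidden, hidden)      # encoder recurrence
        self.Uh = mat(hidden, hidden)      # decoder recurrence
        self.C = mat(hidden, hidden)       # context vector into decoder state
        self.O = mat(vocab_size, hidden)   # hidden state -> vocab logits

    def encode(self, src):
        # Fold the whole source sequence into one fixed-length vector c.
        h = [0.0] * self.H
        for sym in src:
            h = tanh_vec(vec_add(self.E[sym], matvec(self.Wh, h)))
        return h

    def log_prob(self, src, tgt):
        # log p(tgt | src): the quantity the encoder and decoder are
        # jointly trained to maximize (here just evaluated, not trained).
        c = self.encode(src)
        h = tanh_vec(matvec(self.C, c))
        logp = 0.0
        for sym in tgt:
            probs = softmax(matvec(self.O, h))
            logp += math.log(probs[sym])
            h = tanh_vec(vec_add(vec_add(self.E[sym], matvec(self.Uh, h)),
                                 matvec(self.C, c)))
        return logp

model = RNNEncoderDecoder(vocab_size=5, hidden=8)
lp = model.log_prob(src=[1, 2, 3], tgt=[3, 2])
print(lp)  # a negative log-probability, i.e. 0 < p(tgt | src) < 1
```

In the SMT setting described above, scores like `log_prob` for phrase pairs would be added as one more feature to the log-linear translation model, alongside the existing phrase-table features.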